Results 1 - 16 of 16
1.
iScience ; 26(9): 107635, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37664636

ABSTRACT

The increased amount of tertiary lymphoid structures (TLSs) is associated with a favorable prognosis in patients with lung adenocarcinoma (LUAD). However, evaluating TLSs manually is an experience-dependent and time-consuming process, which limits its clinical application. In this multi-center study, we developed an automated computational workflow for quantifying the TLS density in the tumor region of routine hematoxylin and eosin (H&E)-stained whole-slide images (WSIs). The association between the computerized TLS density and disease-free survival (DFS) was further explored in 802 patients with resectable LUAD of three cohorts. Additionally, a Cox proportional hazard regression model, incorporating clinicopathological variables and the TLS density, was established to assess its prognostic ability. The computerized TLS density was an independent prognostic biomarker in patients with resectable LUAD. The integration of the TLS density with clinicopathological variables could support individualized clinical decision-making by improving prognostic stratification.
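The disease-free-survival analyses described above rely on standard survival-statistics tools. As a rough illustration only (not the study's code, and with invented toy data), a minimal pure-Python Kaplan-Meier estimator shows how a survival curve of the kind stratified by TLS density is computed:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate (toy sketch).
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns [(event_time, survival_probability), ...]."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]  # all subjects leaving at time t
        deaths = sum(at_t)
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)
        i += len(at_t)
    return curve
```

In practice such analyses use dedicated packages (e.g., R's survival or Python's lifelines), which also provide the Cox proportional hazards model used in the study.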

2.
J Pers Med ; 13(5)2023 May 19.
Article in English | MEDLINE | ID: mdl-37241024

ABSTRACT

The aim of this study was to investigate the value of 3D Statistical Shape Modelling for orthognathic surgery planning. The goal was to objectify shape variations in the orthognathic population and differences between male and female patients by means of a statistical shape modelling method. Pre-operative CBCT scans of patients for whom 3D Virtual Surgical Plans (3D VSP) were developed at the University Medical Center Groningen between 2019 and 2020 were included. Automatic segmentation algorithms were used to create 3D models of the mandibles, and the statistical shape model was built through principal component analysis. Unpaired t-tests were performed to compare the principal components of the male and female models. A total of 194 patients (130 females and 64 males) were included. The mandibular shape could be visually described by the first five principal components: (1) The height of the mandibular ramus and condyles, (2) the variation in the gonial angle of the mandible, (3) the width of the ramus and the anterior/posterior projection of the chin, (4) the lateral projection of the mandible's angle, and (5) the lateral slope of the ramus and the inter-condylar distance. The statistical test showed significant differences between male and female mandibular shapes in 10 principal components. This study demonstrates the feasibility of using statistical shape modelling to inform physicians about mandible shape variations and relevant differences between male and female mandibles. The information obtained from this study could be used to quantify masculine and feminine mandibular shape aspects and to improve surgical planning for mandibular shape manipulations.
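Principal component analysis of aligned shape coordinates is what underlies a statistical shape model of this kind. As a hedged sketch (toy 2-D point data, not the study's 3D mandible pipeline), the first principal component can be found by power iteration on the covariance matrix:

```python
def principal_component(points, iters=200):
    """First principal component of 2-D points via power iteration
    on the 2x2 covariance matrix (pure-Python illustration)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Covariance matrix entries
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

Real shape-model builds operate on thousands of corresponding 3D landmarks and retain several components, typically via an SVD routine rather than power iteration.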

3.
Med Phys ; 50(10): 6190-6200, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37219816

ABSTRACT

BACKGROUND: Personalized treatment is increasingly required for oropharyngeal squamous cell carcinoma (OPSCC) patients due to emerging new cancer subtypes and treatment options. Outcome prediction models can help identify low- or high-risk patients who may be suitable for de-escalated or intensified treatment approaches. PURPOSE: To develop a deep learning (DL)-based model for predicting multiple and associated efficacy endpoints in OPSCC patients based on computed tomography (CT). METHODS: Two patient cohorts were used in this study: a development cohort consisting of 524 OPSCC patients (70% for training and 30% for independent testing) and an external test cohort of 396 patients. Pre-treatment CT scans with the gross primary tumor volume contours (GTVt) and clinical parameters were available to predict endpoints, including 2-year local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), disease-specific survival (DSS), overall survival (OS), and disease-free survival (DFS). We proposed DL outcome prediction models with a multi-label learning (MLL) strategy that integrates the associations of different endpoints based on clinical factors and CT scans. RESULTS: The multi-label learning models outperformed the models developed for a single endpoint on all endpoints, with high AUCs (≥ 0.80) for 2-year RC, DMFS, DSS, OS, and DFS in the internal independent test set and for all endpoints except 2-year LRC in the external test set. Furthermore, with the developed models, patients could be stratified into high- and low-risk groups that differed significantly for all endpoints in the internal test set and for all endpoints except DMFS in the external test set. CONCLUSION: MLL models demonstrated better discriminative ability for all 2-year efficacy endpoints than single-outcome models in the internal test set and for all endpoints except LRC in the external test set.
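The per-endpoint AUC values reported here follow the standard rank-based (Mann-Whitney) definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch with invented labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation.
    labels: 0/1 ground truth; scores: predicted risk; ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Library implementations (e.g., scikit-learn's roc_auc_score) compute the same quantity more efficiently via sorting.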


Subject(s)
Carcinoma, Squamous Cell; Head and Neck Neoplasms; Oropharyngeal Neoplasms; Humans; Squamous Cell Carcinoma of Head and Neck; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/therapy; Tomography, X-Ray Computed; Disease-Free Survival; Oropharyngeal Neoplasms/diagnostic imaging; Oropharyngeal Neoplasms/therapy; Retrospective Studies
4.
IEEE Trans Med Imaging ; 42(8): 2451-2461, 2023 08.
Article in English | MEDLINE | ID: mdl-37027751

ABSTRACT

Brain tumor segmentation (BTS) in magnetic resonance imaging (MRI) is crucial for brain tumor diagnosis, cancer management, and research. With the success of ten years of BraTS challenges and the advances in CNN and Transformer architectures, many strong BTS models have been proposed to tackle the difficulties of BTS in different technical aspects. However, existing studies rarely consider how to fuse the multi-modality images in a principled manner. In this paper, we leverage the clinical knowledge of how radiologists diagnose brain tumors from multiple MRI modalities and propose a clinical knowledge-driven brain tumor segmentation model, called CKD-TransBTS. Instead of directly concatenating all the modalities, we re-organize the input modalities by separating them into two groups according to the imaging principle of MRI. A dual-branch hybrid encoder with the proposed modality-correlated cross-attention (MCCA) block is designed to extract multi-modality image features. The proposed model inherits strengths from both the Transformer and the CNN: local feature representation for precise lesion boundaries and long-range feature extraction for 3D volumetric images. To bridge the gap between Transformer and CNN features, we propose a Trans&CNN Feature Calibration (TCFC) block in the decoder. We compare the proposed model with six CNN-based models and six Transformer-based models on the BraTS 2021 challenge dataset. Extensive experiments demonstrate that the proposed model achieves state-of-the-art brain tumor segmentation performance compared with all competitors.


Subject(s)
Brain Neoplasms; Renal Insufficiency, Chronic; Humans; Brain Neoplasms/diagnostic imaging; Brain; Algorithms; Calibration; Image Processing, Computer-Assisted
5.
IEEE Trans Med Imaging ; 42(6): 1696-1706, 2023 06.
Article in English | MEDLINE | ID: mdl-37018705

ABSTRACT

Ultrasonography is an important routine examination for breast cancer diagnosis due to its non-invasive, radiation-free, and low-cost properties. However, diagnostic accuracy remains limited by the modality's inherent limitations, so a precise diagnosis from breast ultrasound (BUS) images would be of significant value. Many learning-based computer-aided diagnostic methods have been proposed for breast cancer diagnosis/lesion classification. However, most of them require a pre-defined region of interest (ROI) and then classify the lesion inside the ROI. Conventional classification backbones, such as VGG16 and ResNet50, can achieve promising classification results with no ROI requirement, but these models lack interpretability, which restricts their use in clinical practice. In this study, we propose a novel ROI-free model for breast cancer diagnosis in ultrasound images with interpretable feature representations. We leverage the anatomical prior knowledge that malignant and benign tumors have different spatial relationships between tissue layers, and propose a HoVer-Transformer to formulate this prior knowledge. The proposed HoVer-Trans block extracts inter- and intra-layer spatial information horizontally and vertically. We construct and release an open dataset, GDPH&SYSUCC, for breast cancer diagnosis in BUS. The proposed model is evaluated on three datasets against four CNN-based models and three vision transformer models via five-fold cross-validation. It achieves state-of-the-art classification performance (GDPH&SYSUCC AUC: 0.924, ACC: 0.893, Spec: 0.836, Sens: 0.926) with the best model interpretability. Meanwhile, our proposed model outperforms two senior sonographers in breast cancer diagnosis when only one BUS image is given (GDPH&SYSUCC AUC, ours: 0.924 vs. reader 1: 0.825 vs. reader 2: 0.820).
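The five-fold cross-validation used for evaluation partitions the samples into five shuffled, near-equal folds, each serving once as the held-out test set. A minimal sketch of the fold-generation step (an illustration, not the authors' code; the seed and fold count are arbitrary):

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Split sample indices 0..n-1 into k shuffled, near-equal folds,
    as used for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    return [idx[i::k] for i in range(k)]
```

Stratified variants (e.g., scikit-learn's StratifiedKFold) additionally preserve the benign/malignant class ratio in every fold, which matters for imbalanced diagnostic datasets.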


Subject(s)
Breast Neoplasms; Female; Humans; Breast Neoplasms/diagnostic imaging; Ultrasonography; Ultrasonography, Mammary; Diagnosis, Computer-Assisted/methods
6.
Heliyon ; 9(2): e13694, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852021

ABSTRACT

Background: Manual segmentation of the inferior alveolar canal (IAC) in panoramic images requires considerable time and labor, even for dental experts with extensive experience. The objective of this study was to evaluate the performance of automatic IAC segmentation with ambiguity classification in panoramic images using a deep learning method. Methods: Among 1366 panoramic images, 1000 were selected as the training dataset and the remaining 336 were assigned to the testing dataset. The radiologists divided the testing dataset into four groups according to the quality of the visible segments of the IAC. The segmentation time, Dice similarity coefficient (DSC), precision, and recall were calculated to evaluate the efficiency and segmentation performance of deep learning-based automatic segmentation. Results: Automatic segmentation achieved a DSC of 85.7% (95% confidence interval [CI] 75.4%-90.3%), precision of 84.1% (95% CI 78.4%-89.3%), and recall of 87.7% (95% CI 77.7%-93.4%). Compared with manual annotation (5.9 s per image), automatic segmentation significantly increased the efficiency of IAC segmentation (33 ms per image). The DSC and precision values of group 4 (most visible) were significantly better than those of group 1 (least visible). The recall values of groups 3 and 4 were significantly better than those of group 1. Conclusions: The deep learning-based method achieved high performance for IAC segmentation in panoramic images under different visibilities, and its performance was positively correlated with IAC image clarity.
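The DSC, precision, and recall reported above are all derived from the voxel-wise overlap counts between the predicted and reference masks. A minimal sketch on flat binary masks (toy data, not the study's evaluation code):

```python
def seg_metrics(pred, truth):
    """Dice similarity coefficient, precision, and recall for two binary
    masks given as flat 0/1 sequences of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))        # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))    # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))    # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall
```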

7.
J Clin Periodontol ; 50(5): 591-603, 2023 05.
Article in English | MEDLINE | ID: mdl-36734066

ABSTRACT

AIM: To investigate the relationship between plant-based diet indices (PDIs) and periodontitis and serum IgG antibodies against periodontopathogens in the U.S. MATERIALS AND METHODS: We analysed cross-sectional data on 5651 participants ≥40 years of age from the Third National Health and Nutrition Examination Survey. Food frequency questionnaire data were used to calculate the overall PDI, healthy plant-based diet index (hPDI), and unhealthy plant-based diet index (uPDI). Periodontitis was defined using a half-reduced Centers for Disease Control and Prevention and American Academy of Periodontology case definition. Serum antibodies against 19 periodontopathogens were used to classify the population into two subgroups using hierarchical clustering. Survey-weighted multivariable logistic regressions were applied to assess the associations of PDI/hPDI/uPDI z-scores with periodontitis and hierarchical clusters after adjusting for potential confounders. RESULTS: A total of 2841 (50.3%) participants were defined as having moderate/severe periodontitis. The overall PDI z-score was not significantly associated with the clinical and bacterial markers of periodontitis. By considering the healthiness of plant foods, we observed an inverse association between hPDI z-score and periodontitis (odds ratio [OR] = 0.925, 95% confidence interval [CI]: 0.860-0.995). In contrast, higher uPDI z-score (adherence to unhealthful plant foods) might increase the risk of periodontitis (OR = 1.100; 95% CI: 1.043-1.161). Regarding antibodies against periodontopathogens, the participants in cluster 2 had higher periodontal antibodies than those in cluster 1. The hPDI z-score was positively associated with cluster 2 (OR = 1.192; 95% CI: 1.112-1.278). In contrast, an inverse association between uPDI z-score and cluster 2 was found (OR = 0.834; 95% CI: 0.775-0.896). CONCLUSIONS: Plant-based diets were associated with periodontitis, depending on their quality. 
A healthy plant-based diet was associated with a reduced risk of periodontitis but with elevated antibody levels against periodontopathogens; for an unhealthy plant-based diet, the opposite trends were observed.


Subject(s)
Diet, Vegetarian; Periodontitis; Humans; Nutrition Surveys; Cross-Sectional Studies; Diet; Periodontitis/epidemiology
8.
J Periodontol ; 94(2): 204-216, 2023 02.
Article in English | MEDLINE | ID: mdl-35960608

ABSTRACT

BACKGROUND: The association between periodontitis and allergic symptoms has been investigated. However, the difference in immune signatures between them remains poorly understood. This cross-sectional study assessed the relationship between serum immunoglobulin G (IgG) antibodies to periodontal pathogens and allergic symptoms in a nationwide population cohort. METHODS: Two phases of the Third National Health and Nutrition Examination Survey (NHANES III) were used as the discovery dataset (n = 3700) and validation dataset (n = 4453), respectively. Based on the antibodies against 19 periodontal pathogens, we performed unsupervised hierarchical clustering to categorize the population into three clusters. In the discovery dataset, cluster 1 (n = 2847) had the highest level of IgG antibodies, followed by clusters 2 (n = 588) and 3 (n = 265). Data on allergic symptoms (asthma, hay fever, and wheezing) were obtained using a self-reported questionnaire. Survey-weighted multivariable logistic regression evaluated the association between these clusters and allergic symptoms. RESULTS: In the discovery dataset, participants with lower levels of antibodies to periodontal pathogens exhibited a higher risk of asthma (odds ratio [OR] cluster 3 vs. cluster 1 = 1.820, 95% confidence interval [CI]: 1.153-2.873) and wheezing (OR cluster 3 vs. cluster 1 = 1.550, 95% CI: 1.095-2.194) compared with those with higher periodontal antibodies, whereas the association with hay fever was not significant. Consistent results were found in the validation dataset. CONCLUSIONS: Serum IgG titers to periodontal pathogens were inversely associated with the risk of asthma and wheezing, suggesting a potentially protective role against allergic conditions.
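Hierarchical (agglomerative) clustering of antibody profiles, as used to define the clusters here, repeatedly merges the two closest clusters until the target number remains. A miniature sketch, assuming single linkage for illustration (the abstract does not specify the linkage used):

```python
def single_linkage(points, k):
    """Naive agglomerative clustering with single linkage: merge the two
    closest clusters until k clusters remain. points: tuples of features."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: distance between the closest pair across clusters
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))
    return clusters
```

Production analyses would use an optimized implementation (e.g., scipy.cluster.hierarchy) on the full 19-dimensional antibody vectors.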


Subject(s)
Asthma; Rhinitis, Allergic, Seasonal; Humans; Nutrition Surveys; Cross-Sectional Studies; Respiratory Sounds; Antibodies, Bacterial; Immunoglobulin G
9.
J Transl Med ; 20(1): 595, 2022 12 14.
Article in English | MEDLINE | ID: mdl-36517832

ABSTRACT

BACKGROUND: Tumor histomorphology analysis plays a crucial role in predicting the prognosis of resectable lung adenocarcinoma (LUAD). Computer-extracted image texture features have been previously shown to be correlated with outcome. However, a comprehensive, quantitative, and interpretable predictor remains to be developed. METHODS: In this multi-center study, we included patients with resectable LUAD from four independent cohorts. An automated pipeline was designed for extracting texture features from the tumor region in hematoxylin and eosin (H&E)-stained whole slide images (WSIs) at multiple magnifications. A multi-scale pathology image texture signature (MPIS) was constructed with the discriminative texture features in terms of overall survival (OS) selected by the LASSO method. The prognostic value of MPIS for OS was evaluated through univariable and multivariable analysis in the discovery set (n = 111) and the three external validation sets (V1, n = 115; V2, n = 116; and V3, n = 246). We constructed a Cox proportional hazards model incorporating clinicopathological variables and MPIS to assess whether MPIS could improve prognostic stratification. We also performed histo-genomics analysis to explore the associations between texture features and biological pathways. RESULTS: A set of eight texture features was selected to construct MPIS. In multivariable analysis, a higher MPIS was associated with significantly worse OS in the discovery set (HR 5.32, 95%CI 1.72-16.44; P = 0.0037) and the three external validation sets (V1: HR 2.63, 95%CI 1.10-6.29, P = 0.0292; V2: HR 2.99, 95%CI 1.34-6.66, P = 0.0075; V3: HR 1.93, 95%CI 1.15-3.23, P = 0.0125). The model that integrated clinicopathological variables and MPIS had better discrimination for OS compared to the clinicopathological variables-based model in the discovery set (C-index, 0.837 vs. 0.798) and the three external validation sets (V1: 0.704 vs. 0.679; V2: 0.728 vs. 0.666; V3: 0.696 vs. 0.669). 
Furthermore, the identified texture features were associated with biological pathways, such as cytokine activity, structural constituent of cytoskeleton, and extracellular matrix structural constituent. CONCLUSIONS: MPIS was an independent prognostic biomarker that was robust and interpretable. Integration of MPIS with clinicopathological variables improved prognostic stratification in resectable LUAD and might help enhance the quality of individualized postoperative care.
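The C-index comparisons above use Harrell's concordance index: among comparable patient pairs (where the earlier time is an observed event), the fraction in which the higher risk score belongs to the patient with the earlier event. A minimal pure-Python sketch with invented data:

```python
def c_index(times, events, risks):
    """Harrell's concordance index. times: follow-up times; events: 1 if the
    event was observed, 0 if censored; risks: predicted risk scores.
    Ties in risk count as 0.5."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair is comparable only if i's event time is earlier and observed
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, matching the 0.67-0.84 range reported for the models above.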


Subject(s)
Adenocarcinoma of Lung; Lung Neoplasms; Humans; Prognosis; Retrospective Studies; Proportional Hazards Models; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/surgery
10.
iScience ; 25(12): 105605, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36505920

ABSTRACT

A high abundance of tumor-infiltrating lymphocytes (TILs) has a positive impact on the prognosis of patients with lung adenocarcinoma (LUAD). We aimed to develop and validate an artificial intelligence-driven pathological scoring system for assessing TILs on H&E-stained whole-slide images of LUAD. Deep learning-based methods were applied to calculate the densities of lymphocytes in cancer epithelium (DLCE) and cancer stroma (DLCS), and a risk score (WELL score) was built through linear weighting of DLCE and DLCS. Association between WELL score and patient outcome was explored in 793 patients with stage I-III LUAD in four cohorts. WELL score was an independent prognostic factor for overall survival and disease-free survival in the discovery cohort and validation cohorts. The prognostic prediction model-integrated WELL score demonstrated better discrimination performance than the clinicopathologic model in the four cohorts. This artificial intelligence-based workflow and scoring system could promote risk stratification for patients with resectable LUAD.

11.
EClinicalMedicine ; 52: 101562, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35928032

ABSTRACT

Background: Early prediction of treatment response to neoadjuvant chemotherapy (NACT) in patients with human epidermal growth factor receptor 2 (HER2)-positive breast cancer can facilitate timely adjustment of treatment regimens. We aimed to develop and validate a Siamese multi-task network (SMTN) for predicting pathological complete response (pCR) based on longitudinal ultrasound images at the early stage of NACT. Methods: In this multicentre, retrospective cohort study, a total of 393 patients with biopsy-proven HER2-positive breast cancer were retrospectively enrolled from three hospitals in China between December 16, 2013 and March 05, 2021, and allocated into a training cohort and two external validation cohorts. Patients receiving full cycles of NACT and with surgical pathological results available were eligible for inclusion. The key exclusion criteria were missing ultrasound images and/or clinicopathological characteristics. The proposed SMTN consists of two subnetworks that can be joined at multiple layers, allowing the integration of multi-scale features and the extraction of dynamic information from longitudinal ultrasound images before and after the first/second cycles of NACT. We constructed a clinical model as a baseline using multivariable logistic regression analysis, and then evaluated the performance of the SMTN against it. Findings: The training cohort, comprising 215 patients, was selected from Yunnan Cancer Hospital. The two independent external validation cohorts, comprising 95 and 83 patients, were selected from Guangdong Provincial People's Hospital and Shanxi Cancer Hospital, respectively.
The SMTN yielded area under the receiver operating characteristic curve (AUC) values of 0.986 (95% CI: 0.977-0.995), 0.902 (95% CI: 0.856-0.948), and 0.957 (95% CI: 0.924-0.990) in the training cohort and the two external validation cohorts, respectively, which were significantly higher than those of the clinical model (AUC: 0.524-0.588, all P < 0.05). The AUC values of the SMTN within the anti-HER2 therapy subgroups were 0.833-0.972 in the two external validation cohorts. Moreover, 272 of 279 (97.5%) non-pCR patients (159 of 160 (99.4%), 53 of 54 (98.1%), and 60 of 65 (92.3%) in the training and two external validation cohorts, respectively) were successfully identified by the SMTN, suggesting that they could benefit from regimen adjustment at the early stage of NACT. Interpretation: The SMTN was able to predict pCR at the early stage of NACT for HER2-positive breast cancer patients, which could guide clinicians in adjusting treatment regimens. Funding: Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006); National Natural Science Foundation of China (No. 82071892, 82171920); Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); National Science Foundation for Young Scientists of China (No. 82102019, 82001986); China Postdoctoral Science Foundation (No. 2020M682643); Outstanding Youth Science Foundation of Yunnan Basic Research Project (202101AW070001); Scientific Research Fund Project of the Department of Education of Yunnan Province (2022J0249); Science and Technology Projects in Guangzhou (202201020001; 202201010513); High-level Hospital Construction Project (DFJH201805, DFJHBF202105).

12.
J Pers Med ; 11(7)2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34357096

ABSTRACT

Medical imaging techniques, such as (cone-beam) computed tomography and magnetic resonance imaging, have proven to be valuable components of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and OMFS treatment planning. Segmented mandible structures are used to visualize mandible volumes effectively and to evaluate particular mandible properties quantitatively. However, mandible segmentation is challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth (fillings) or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer-vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the available fully and semi-automatic mandible segmentation methods published in the scientific literature. This review provides clinicians and researchers with a clear description of the scientific advances in this field, to help develop novel automatic methods for clinical applications.

13.
J Pers Med ; 11(6)2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34208429

ABSTRACT

Cone-beam computed tomography (CBCT) is attractive for maxillofacial surgery and orthodontic treatment planning because of its low radiation dose and short scanning duration, and accurate segmentation of the mandible from CBCT scans is an important step in building a personalized 3D digital mandible model. CBCT images, however, exhibit lower contrast and higher levels of noise and artifacts than conventional computed tomography (CT), due to the extremely low radiation dose, which makes automatic mandible segmentation from CBCT data challenging. In this work, we propose a novel coarse-to-fine segmentation framework based on a 3D convolutional neural network and a recurrent SegUnet for mandible segmentation in CBCT scans. Specifically, mandible segmentation is decomposed into two stages: localization of the mandible-like region by rough segmentation, followed by accurate segmentation of the mandible details. The method was evaluated on a dental CBCT dataset. In addition, we evaluated the proposed method and compared it with state-of-the-art methods on two CT datasets. The experiments indicate that the proposed algorithm provides more accurate and robust segmentation results than the state-of-the-art models across different imaging techniques on these three datasets.

14.
J Pers Med ; 11(5)2021 May 01.
Article in English | MEDLINE | ID: mdl-34062762

ABSTRACT

Accurate mandible segmentation is significant in maxillofacial surgery for guiding clinical diagnosis and treatment and for developing appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images used in oral and maxillofacial surgery (OMFS) often contain metal parts and are therefore susceptible to metal artifacts, such as weak and blurred boundaries caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates overall mandible anatomical knowledge. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, with recurrent connections that maintain the continuity of the mandible structure. The effectiveness of the proposed network is substantiated on a dental CBCT dataset from orthodontic treatment containing 59 patients. The experiments show that the proposed SASeg can be easily used to improve prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, our proposed SASeg achieves better segmentation performance.

15.
J Pers Med ; 11(6)2021 May 31.
Article in English | MEDLINE | ID: mdl-34072714

ABSTRACT

PURPOSE: Classic encoder-decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance the condyles and coronoid processes, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures. METHODS: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, our proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN that segments a single slice in the CT scan. Our approach can perform 3D mandible segmentation on sequential data of varying lengths and does not incur a large computational cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. The final accuracy of the proposed RCNNSeg was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. RESULTS: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared with state-of-the-art approaches on the PDDCA dataset.
The proposed RCNNSeg generated the most accurate segmentations with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. CONCLUSIONS: The proposed RCNNSeg method generated more accurate automated segmentations than those of the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
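The surface-distance metrics reported above (ASD and Hausdorff distance) can be illustrated on small point sets. This is a simplification: real evaluations operate on surface voxels or meshes, and 95HD takes the 95th percentile of the directed distances rather than the maximum:

```python
def surface_distances(a, b):
    """Average symmetric surface distance (ASD) and symmetric Hausdorff
    distance between two point sets a and b (tuples of coordinates)."""
    def d(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    # For each point, distance to the nearest point of the other set
    d_ab = [min(d(p, q) for q in b) for p in a]
    d_ba = [min(d(p, q) for q in a) for p in b]
    asd = (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))
    hd = max(max(d_ab), max(d_ba))
    return asd, hd
```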

16.
Phys Med Biol ; 64(17): 175020, 2019 09 05.
Article in English | MEDLINE | ID: mdl-31239411

ABSTRACT

Segmentation of the mandibular bone in CT scans is crucial for 3D virtual surgical planning of craniofacial tumor resection and free-flap reconstruction of the resection defect, in order to obtain a detailed surface representation of the bones. A major drawback of most existing mandibular segmentation methods is that they require a large amount of expert knowledge for manual or partially automatic segmentation; in practice, given the shortage of experienced doctors and experts, high-quality expert knowledge is hard to obtain. Furthermore, segmentation of mandibles in CT scans is seriously affected by metal artifacts and by large variations in mandible shape and size among individuals. To address these challenges, we propose an automatic mandible segmentation approach for CT scans that considers the continuity of anatomical structures across different planes. The approach adopts the U-Net architecture and then combines the resulting 2D segmentations from three orthogonal planes into a 3D segmentation. We implement this segmentation approach on two head and neck datasets and evaluate its performance. Experimental results show that our proposed approach exhibits high accuracy for mandible segmentation in CT scans.
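Combining per-plane 2D segmentations into one 3D result can be done in several ways; a per-voxel majority vote is one simple rule, shown here purely as an illustration (the paper's exact fusion rule may differ, and the tiny nested-list volume is invented):

```python
def fuse_planes(axial, coronal, sagittal):
    """Fuse three binary segmentations of the same volume, one per imaging
    plane, by per-voxel majority vote. Each input: nested lists vol[z][y][x]
    of 0/1, all with identical dimensions."""
    Z, Y, X = len(axial), len(axial[0]), len(axial[0][0])
    return [[[1 if axial[z][y][x] + coronal[z][y][x] + sagittal[z][y][x] >= 2
              else 0
              for x in range(X)]
             for y in range(Y)]
            for z in range(Z)]
```

The vote requires at least two of the three plane-wise networks to agree, which suppresses isolated false positives from any single plane.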


Subject(s)
Imaging, Three-Dimensional/methods; Mandible/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Humans